Results 1 - 20 of 33
1.
ACM International Conference Proceeding Series ; : 38-45, 2022.
Article in English | Scopus | ID: covidwho-20238938

ABSTRACT

Lung CT images of COVID-19 patients show distinct pathological features, and segmenting the lesion areas accurately with deep learning is of great significance for the diagnosis and treatment of COVID-19 patients. Instance segmentation has higher sensitivity and can output bounding boxes for the lesion regions; however, traditional instance segmentation methods are weak on small lesions, and there is still room for improvement in segmentation accuracy. We propose an instance segmentation network called Semantic R-CNN. First, a semantic segmentation branch is added on top of Mask R-CNN; the Python image-processing library Skimage is used to label the connected components of the semantic segmentation result, and their rectangular boundaries are extracted and used as proposals, replacing the Region Proposal Network in the instance segmentation stage. Second, Atrous Spatial Pyramid Pooling is introduced into the Feature Pyramid Network (FPN), improving the feature fusion in the FPN. Finally, a cascade scheme is introduced into the detection branch of the network to refine the proposals. Segmentation experiments were carried out on the CC-CCII pathological lesion segmentation dataset: the average precision of the segmentation is 40.56 mAP, an improvement of 9.98 mAP over Mask R-CNN. After fusing the semantic and instance segmentation results, the Dice coefficient is 80.7% and the sensitivity is 85.8%, which are 1.6% and 8.06% higher than Inf-Net, respectively. The proposed network improves segmentation accuracy and reduces false negatives. © 2022 ACM.
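The proposal-generation step described here, labeling connected components of the semantic mask with Skimage and taking their bounding rectangles as proposals, can be sketched roughly as follows (a minimal illustration, not the authors' code; the mask and box format are assumptions):

```python
# Minimal sketch: turn a semantic-segmentation mask into box proposals via
# connected-component labeling, as the abstract describes doing with Skimage.
import numpy as np
from skimage.measure import label, regionprops

semantic_mask = np.zeros((512, 512), dtype=np.uint8)
semantic_mask[100:140, 200:260] = 1      # hypothetical lesion blob
semantic_mask[300:310, 50:70] = 1        # hypothetical small lesion

labeled = label(semantic_mask, connectivity=2)        # 8-connected components
proposals = []
for region in regionprops(labeled):
    min_r, min_c, max_r, max_c = region.bbox          # rectangular boundary of the component
    proposals.append((min_c, min_r, max_c, max_r))    # (x1, y1, x2, y2) box

print(proposals)   # such boxes would stand in for RPN proposals in the detection head
```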

2.
Biomed Signal Process Control ; 85: 104974, 2023 Aug.
Article in English | MEDLINE | ID: covidwho-2302920

ABSTRACT

An automatic method for qualitative and quantitative evaluation of chest Computed Tomography (CT) images is essential for diagnosing COVID-19 patients. We aim to develop an automated COVID-19 prediction framework using deep learning. We put forth a novel Deep Neural Network (DNN) composed of an attention-based dense U-Net with deep supervision for COVID-19 lung lesion segmentation from chest CT images. We incorporate a dense U-Net in which a 5×5 convolution kernel is used instead of 3×3. Dense and transition blocks are introduced to implement a densely connected network at each encoder level. In addition, an attention mechanism is applied between the encoder, the skip connections, and the decoder; these mechanisms efficiently preserve both high- and low-level features. The deep supervision mechanism creates secondary segmentation maps from the features and combines these maps from various resolution levels to produce a better final segmentation map. The trained DNN takes the test data as input and generates a prediction for COVID-19 lesion segmentation. The proposed model has been applied to the MedSeg COVID-19 chest CT segmentation dataset. Data pre-processing helps the training process and improves performance. We compare the performance of the proposed DNN with state-of-the-art models by computing the well-known metrics: Dice coefficient, Jaccard coefficient, accuracy, specificity, sensitivity, and precision. The proposed model outperforms the state-of-the-art models. This new model may be considered an efficient automated screening system for COVID-19 diagnosis and can potentially improve patient health care and management.
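For reference, the overlap metrics listed in this abstract can be computed from a pair of binary masks as in the following sketch (not from the paper; `pred` and `gt` are assumed to be NumPy arrays of the same shape):

```python
# Standard segmentation metrics from binary prediction and ground-truth masks.
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true positives
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    tn = np.logical_and(~pred, ~gt).sum()    # true negatives
    return {
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "jaccard":     tp / (tp + fp + fn + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "precision":   tp / (tp + fp + eps),
    }
```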

3.
Multimed Syst ; : 1-14, 2023 Apr 19.
Article in English | MEDLINE | ID: covidwho-2290683

ABSTRACT

Coronavirus disease 2019, initially named 2019-nCoV (COVID-19), was declared a global pandemic by the World Health Organization in March 2020. Because of the growing number of COVID-19 patients, the world's health infrastructure has been overwhelmed, and computer-aided diagnosis has become a necessity. Most of the models proposed for COVID-19 detection in chest X-rays perform image-level analysis; they do not identify the infected region in the images for an accurate and precise diagnosis. Lesion segmentation helps medical experts identify the infected region in the lungs. Therefore, in this paper, a UNet-based encoder-decoder architecture is proposed for COVID-19 lesion segmentation in chest X-rays. To improve performance, the proposed model employs an attention mechanism and a convolution-based atrous spatial pyramid pooling module. The proposed model obtained Dice similarity coefficient and Jaccard index values of 0.8325 and 0.7132, respectively, and outperformed the state-of-the-art UNet model. An ablation study has been performed to highlight the contributions of the attention mechanism and of the small dilation rates in the atrous spatial pyramid pooling module.
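As a rough illustration of the kind of atrous spatial pyramid pooling block with small dilation rates referred to here, the PyTorch sketch below runs parallel dilated convolutions and fuses them with a 1×1 convolution; the channel counts and rates are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a convolution-based ASPP block with small dilation rates.
import torch
import torch.nn as nn

class SmallRateASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Parallel atrous convolutions capture context at several receptive fields,
        # then the concatenated responses are fused by a 1x1 convolution.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# x = torch.randn(1, 256, 32, 32); y = SmallRateASPP(256, 128)(x)  # -> (1, 128, 32, 32)
```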

4.
9th International Forum on Digital Multimedia Communication, IFTC 2022 ; 1766 CCIS:377-390, 2023.
Article in English | Scopus | ID: covidwho-2269784

ABSTRACT

Coronavirus disease 2019 (COVID-19) has been spreading since late 2019, leading the world into a serious health crisis. To control the rate of spread, identifying patients accurately and quickly is the most crucial step. Chest computed tomography (CT) images are an important basis for diagnosing COVID-19 and also allow doctors to understand the details of the lung infection. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images, but their segmentation accuracy is still limited. To effectively quantify the severity of lung infections, we propose a Sobel operator combined with Multi-Attention networks for COVID-19 lesion segmentation (SMA-Net). In SMA-Net, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network to handle the small size of the lesions. Comparative experiments on public COVID-19 datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IoU) of the proposed SMA-Net are 86.1% and 77.8%, respectively, which are better than those of most existing neural networks used for COVID-19 lesion segmentation. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
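The edge-feature-fusion idea can be sketched as computing a Sobel edge map of the CT slice and stacking it with the original image as an extra input channel; this is an illustrative approximation, not the authors' implementation.

```python
# Sketch: fuse a Sobel edge map with the CT slice as a second input channel.
import numpy as np
from skimage.filters import sobel

def fuse_edge_channel(ct_slice):
    edges = sobel(ct_slice)                                   # Sobel gradient magnitude
    edges = (edges - edges.min()) / (np.ptp(edges) + 1e-7)    # normalize to [0, 1]
    return np.stack([ct_slice, edges], axis=0)                # shape (2, H, W) network input

fused = fuse_edge_channel(np.random.rand(512, 512).astype(np.float32))
```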

5.
IEEE Transactions on Emerging Topics in Computational Intelligence ; : 1-14, 2023.
Article in English | Scopus | ID: covidwho-2257266

ABSTRACT

COVID-19-like pandemics are a major threat to the global health system and cause many deaths across all ages. Large-scale medical image datasets (e.g., X-rays, computed tomography (CT)) favor the accuracy of deep learning (DL) in screening for COVID-19-like pneumonia. However, the cost, time, and effort of acquiring and annotating large CT datasets make it impossible to obtain large numbers of samples from a single institution, so research attention has shifted toward sharing medical images across numerous medical institutions. Owing to the necessity of preserving patient privacy, it is challenging to build a centralized dataset from many institutions, especially during a pandemic. Moreover, differences in the data acquisition process from one institution to another bring another challenge, known as distribution heterogeneity. This paper presents a novel federated learning framework, called Federated Multi-Site COVID-19 (FEDMSCOV), for efficient, generalizable, and privacy-preserving segmentation of COVID-19 infection from multi-site data. In FEDMSCOV, a novel local drift smoothing (LDS) module encodes the input from feature space to frequency space, aiming to suppress the components that are not conducive to generalization. Given the smoothed local updates, FEDMSCOV presents a novel Mixture-of-Experts (MoE) scheme to resolve the global shift in parameters. An adapted differential privacy method is applied to protect the privacy of local updates during training. Experimental evaluation on a large-scale multi-institutional COVID-19 dataset demonstrated the efficiency of the proposed framework over competing learning approaches with statistical significance. © IEEE.
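The differential-privacy step applied to local updates in federated training typically amounts to clipping each client update to a norm bound and adding Gaussian noise before it is sent to the server; the sketch below shows that generic pattern with illustrative constants, not FEDMSCOV's actual mechanism.

```python
# Generic sketch of privatizing a client's local update (clip + Gaussian noise).
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=np.random.default_rng(0)):
    flat = update.ravel()
    norm = np.linalg.norm(flat)
    clipped = flat * min(1.0, clip_norm / (norm + 1e-12))          # norm clipping
    noisy = clipped + rng.normal(0.0, noise_std, size=flat.shape)  # Gaussian noise
    return noisy.reshape(update.shape)

local_update = np.random.randn(3, 3)
protected = privatize_update(local_update)   # sent to the server instead of the raw update
```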

6.
Biomed Signal Process Control ; 85: 104905, 2023 Aug.
Article in English | MEDLINE | ID: covidwho-2278569

ABSTRACT

Purpose: A semi-supervised two-step methodology is proposed to obtain a volumetric estimation of COVID-19-related lesions on Computed Tomography (CT) images. Methods: First, damaged tissue was segmented from CT images using a probabilistic active contours approach. Second, lung parenchyma was extracted using a previously trained U-Net. Finally, the volumetric estimation of COVID-19 lesions was calculated with respect to the lung parenchyma masks. Our approach was validated using a publicly available dataset containing 20 previously labeled and manually segmented COVID-19 CT images. It was then applied to CT scans of 295 COVID-19 patients admitted to an intensive care unit. We compared the lesion estimates between deceased and surviving patients for high- and low-resolution images. Results: A comparable median Dice similarity coefficient of 0.66 was achieved for the 20 validation images. For the 295-image dataset, the results show a significant difference in lesion percentages between deceased and surviving patients, with p-values of 9.1 × 10⁻⁴ for low-resolution and 5.1 × 10⁻⁵ for high-resolution images. Furthermore, the difference in lesion percentages between high- and low-resolution images was 10% on average. Conclusion: The proposed approach could help estimate the lesion size caused by COVID-19 in CT images and may be considered an alternative for obtaining a volumetric segmentation of this novel disease without requiring large amounts of COVID-19 labeled data to train an artificial intelligence algorithm. The low variation between the estimated percentages of lesions in high- and low-resolution CT images suggests that the proposed approach is robust, and it may provide valuable information to differentiate between surviving and deceased patients.
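The final volumetric estimation step, expressing lesion volume as a percentage of the lung parenchyma, reduces to a simple voxel count over the two binary masks; the sketch below assumes masks of 0/1 and a voxel spacing taken from the CT header (not the paper's code).

```python
# Sketch: lesion volume as a percentage of lung-parenchyma volume.
import numpy as np

def lesion_percentage(lesion_mask, lung_mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    voxel_volume = np.prod(voxel_spacing_mm)                        # mm^3 per voxel
    lesion_in_lung = np.logical_and(lesion_mask > 0, lung_mask > 0)
    lesion_vol = lesion_in_lung.sum() * voxel_volume
    lung_vol = (lung_mask > 0).sum() * voxel_volume
    return 100.0 * lesion_vol / max(lung_vol, 1e-7)
```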

7.
Sensors (Basel) ; 23(5)2023 Feb 24.
Article in English | MEDLINE | ID: covidwho-2269783

ABSTRACT

Medical images are an important basis for diagnosing diseases; among them, CT images are an important tool for diagnosing lung lesions. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images. However, the segmentation accuracy of these methods is still limited. To effectively quantify the severity of lung infections, we propose a Sobel operator combined with multi-attention networks for COVID-19 lesion segmentation (SMA-Net). In the SMA-Net method, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network to handle small lesions. Comparative experiments on public COVID-19 datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IoU) of the proposed SMA-Net model are 86.1% and 77.8%, respectively, which are better than those of most existing segmentation networks.
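The Tversky loss used here for small lesions weights false negatives and false positives asymmetrically; a minimal PyTorch sketch follows (the alpha/beta values are common defaults, not necessarily SMA-Net's).

```python
# Tversky loss: with beta > alpha, false negatives are penalized more than false positives.
import torch

def tversky_loss(pred_probs, target, alpha=0.3, beta=0.7, eps=1e-7):
    # pred_probs, target: tensors of shape (N, H, W) with values in [0, 1]
    tp = (pred_probs * target).sum(dim=(1, 2))
    fp = (pred_probs * (1 - target)).sum(dim=(1, 2))
    fn = ((1 - pred_probs) * target).sum(dim=(1, 2))
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky_index).mean()
```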


Subject(s)
COVID-19; Labor, Obstetric; Pregnancy; Female; Humans; Image Processing, Computer-Assisted
8.
Methods ; 205: 200-209, 2022 09.
Article in English | MEDLINE | ID: covidwho-2255505

ABSTRACT

BACKGROUND: Lesion segmentation is a critical step in medical image analysis, and methods to identify pathology without time-intensive manual labeling of data are of utmost importance during a pandemic and in resource-constrained healthcare settings. Here, we describe a method for fully automated segmentation and quantification of pathological COVID-19 lung tissue on chest Computed Tomography (CT) scans without the need for manually segmented training data. METHODS: We trained a cycle-consistent generative adversarial network (CycleGAN) to convert images of COVID-19 scans into their generated healthy equivalents. Subtraction of the generated healthy images from their corresponding original CT scans yielded maps of pathological tissue, without background lung parenchyma, fissures, airways, or vessels. We then used these maps to construct three-dimensional lesion segmentations. Using a validation dataset, Dice scores were computed for our lesion segmentations and other published segmentation networks using ground truth segmentations reviewed by radiologists. RESULTS: The COVID-to-Healthy generator eliminated high Hounsfield unit (HU) voxels within pulmonary lesions and replaced them with lower HU voxels. The generator did not distort normal anatomy such as vessels, airways, or fissures. The generated healthy images had higher gas content (2.45 ± 0.93 vs 3.01 ± 0.84 L, P < 0.001) and lower tissue density (1.27 ± 0.40 vs 0.73 ± 0.29 kg, P < 0.001) than their corresponding original COVID-19 images, and they were not significantly different from those of the healthy images (P < 0.001). Using the validation dataset, lesion segmentations achieved an average Dice score of 55.9, comparable to other weakly supervised networks that do require manual segmentations. CONCLUSION: Our CycleGAN model successfully segmented pulmonary lesions in mild and severe COVID-19 cases. Our model's performance was comparable to other published models; however, our model is unique in its ability to segment lesions without the need for manual segmentations.
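The subtraction step described here can be illustrated as follows: given an original COVID-19 slice and its generated "healthy" counterpart (both in HU), the difference highlights pathological tissue, and a threshold (an illustrative value, not the paper's) turns it into a binary lesion map.

```python
# Sketch: lesion map from the difference between original and generated-healthy slices.
import numpy as np

def lesion_map(original_hu, generated_healthy_hu, threshold_hu=100.0):
    difference = original_hu - generated_healthy_hu     # lesions keep high HU in the original
    return (difference > threshold_hu).astype(np.uint8)

orig = np.random.uniform(-1000, 100, size=(512, 512))
fake_healthy = orig - np.where(orig > -300, 150, 0)     # toy stand-in for the generator output
mask = lesion_map(orig, fake_healthy)
```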


Subject(s)
COVID-19; Image Processing, Computer-Assisted; COVID-19/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods
9.
2022 International Conference on Image Processing, Computer Vision and Machine Learning, ICICML 2022 ; : 410-415, 2022.
Article in English | Scopus | ID: covidwho-2233224

ABSTRACT

Coronavirus disease (COVID-19) has posed a significant threat to humans since 2019. Automated and accurate segmentation of the infected regions of COVID-19 computed tomography (CT) images can help doctors diagnose and treat the disease. However, the variable shape of COVID-19-infected areas, which can easily be confused with other lung tissues, poses a challenge for CT image segmentation. To address this problem, a deep learning-based convolutional neural network is proposed for the automatic segmentation of COVID-19 lung infection regions. The proposed segmentation method uses a U-Net as the backbone, constructed as a coarse-to-fine segmentation network. First, we introduce our designed contour-enhanced module (CA) in the coarse segmentation network to effectively extract the lung region; second, we introduce our designed multi-scale feature attention module (MFA) in the fine segmentation network to enable the network to efficiently extract spatial and channel information, better focus on quantifying the effective region, and enhance the segmentation result. On the public COVID-19 dataset, the proposed network achieves the best segmentation results: the Dice, IoU, F1-score, and sensitivity metrics reach 88.74%, 78.73%, 86.58%, and 88.16%, respectively. DCA-Net can efficiently segment the COVID-19-infected region, which can be of great clinical benefit. © 2022 IEEE.

10.
30th European Signal Processing Conference, EUSIPCO 2022 ; 2022-August:1362-1366, 2022.
Article in English | Scopus | ID: covidwho-2101855

ABSTRACT

Deep learning has shown remarkable promise in medical imaging tasks, reaching an expert level of performance for some diseases. However, these models often fail to generalize properly to data not used during training, which is a major roadblock to successful clinical deployment. This paper proposes a generalization enhancement approach that can mitigate the gap between source and unseen data in deep learning-based segmentation models without using ground-truth masks of the target domain. Leveraging the subset of the unseen domain's CT slices for which the model trained on the source data yields the most confident predictions, together with their predicted masks, the model learns helpful features of the unseen data over a retraining process. We investigated the effectiveness of the introduced method over three rounds of experiments on three open-access COVID-19 lesion segmentation datasets, and the results show consistent improvements in the segmentation model's performance on datasets not seen during training. © 2022 European Signal Processing Conference, EUSIPCO. All rights reserved.
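A simplified sketch of this retraining idea is shown below: keep only the unseen-domain slices on which the source-trained model is most confident and reuse its predicted masks as pseudo-labels for another training round. The confidence score used here (mean distance of the predicted probabilities from 0.5) and the keep fraction are illustrative choices, not the paper's.

```python
# Sketch: select confident unseen-domain slices and their predicted masks for retraining.
import numpy as np

def select_confident_slices(prob_maps, keep_fraction=0.2):
    # prob_maps: array of shape (num_slices, H, W) with predicted lesion probabilities
    confidence = np.abs(prob_maps - 0.5).mean(axis=(1, 2))
    keep = int(len(prob_maps) * keep_fraction)
    chosen = np.argsort(confidence)[-keep:]                   # most confident slices
    pseudo_masks = (prob_maps[chosen] > 0.5).astype(np.uint8)
    return chosen, pseudo_masks    # fed back as extra (image, pseudo-mask) training pairs
```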

11.
Front Physiol ; 13: 981463, 2022.
Article in English | MEDLINE | ID: covidwho-2022850

ABSTRACT

Owing to its high contagiousness and mutation, the COVID-19 pandemic has caused more than 520 million infections worldwide and has had irreversible effects on society. Computed tomography (CT) images can clearly demonstrate the lung lesions of patients. This study used deep learning techniques to assist doctors in the screening and quantitative analysis of this disease, and will therefore help to improve diagnostic efficiency and reduce the risk of infection. We propose a new method that improves U-Net for lesion segmentation in the chest CT images of COVID-19 patients. 750 annotated chest CT images of 150 patients diagnosed with COVID-19 were selected to classify, identify, and segment the background area, lung area, ground-glass opacity, and lung parenchyma. First, to address the loss of lesion detail during downsampling, we replaced part of the convolution operations with atrous convolution in the encoder of the segmentation network and employed the convolutional block attention module (CBAM) to enhance the weighting of important feature information. Second, the Swin Transformer structure is introduced in the last layer of the encoder to reduce the number of parameters and improve network performance. We used the CC-CCII lesion segmentation dataset for training and validation of the model. The ablation experiments demonstrate that this method achieves a significant performance gain, with a mean pixel accuracy of 87.62%, mean intersection over union of 80.6%, and Dice similarity coefficient of 88.27%. Furthermore, we verified that this model achieves superior performance in comparison to other models. Thus, the proposed method can better assist doctors in evaluating and analyzing the condition of COVID-19 patients.

12.
14th IEEE International Conference on Signal Processing and Communications, SPCOM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018987

ABSTRACT

Computed Tomography (CT)-based analysis can assist doctors in the prompt diagnosis of COVID-19 infection. Automated segmentation of lesions in chest CT scans helps in determining the severity of the infection. The presented work addresses the task of automated segmentation of COVID-19 lesions. A U-Net framework incorporating spatial-channel attention modules (contextual relationships), an Atrous Spatial Pyramid Pooling module (a wider receptive field), and deep supervision (lesion focus, less error propagation) is proposed. Focal Tversky loss is used to evaluate the outputs at coarser scales, while Tversky loss evaluates the final segmentation output; this combination of losses is used to enhance the segmentation of small lesions. The framework is trained on the CT scans of 20 subjects of the COVID-19 CT Lung and Infection Segmentation Dataset and tested on the MosMed dataset of 50 subjects, in which the infection has affected less than 25% of the lung parenchyma. The experimental results show that the proposed method is effective in segmenting the hard ROIs in the MosMed data, resulting in a mean Dice score of 0.57 (9% higher than the state of the art). © 2022 IEEE.

13.
J Med Imaging (Bellingham) ; 9(5): 054001, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-2019653

ABSTRACT

Purpose: Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease 2019 (COVID-19) patients but are not part of clinical routine because the required manual segmentation of lung lesions is prohibitively time consuming. We aim to automatically segment ground-glass opacities and high opacities (comprising consolidation and pleural effusion). Approach: We propose a new fully automated deep-learning framework for fast multi-class segmentation of lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional long short-term memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed with five-fold cross-validation to segment COVID-19 lesions. The performance of the method was evaluated on CT datasets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2, 68 unseen test cases, and 695 independent controls. Results: Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score of 0.89 ± 0.07; excellent correlations of 0.93 and 0.98 were obtained for ground-glass opacity (GGO) and high opacity volumes, respectively. In the external testing set of 68 patients, we observed a Dice score of 0.89 ± 0.06 as well as excellent correlations of 0.99 and 0.98 for GGO and high opacity volumes, respectively. Computations for a CT scan comprising 120 slices were performed in under 3 s on a computer equipped with an NVIDIA TITAN RTX GPU. Diagnostically, the automated quantification of the lung burden (%) discriminated COVID-19 patients from controls with an area under the receiver operating characteristic curve of 0.96 (0.95-0.98). Conclusions: Our method allows rapid, fully automated quantitative measurement of pneumonia burden from CT, which can be used to rapidly assess the severity of COVID-19 pneumonia on chest CT.

14.
J Real Time Image Process ; 19(6): 1091-1104, 2022.
Article in English | MEDLINE | ID: covidwho-2007237

ABSTRACT

The novel coronavirus pneumonia (COVID-19) is the world's most serious public health crisis, posing a serious threat to public health. In clinical practice, automatic segmentation of the lesion from computed tomography (CT) images using deep learning methods provides a promising tool for identifying and diagnosing COVID-19. To improve the accuracy of image segmentation, an attention mechanism is adopted to highlight important features. However, existing attention methods perform weakly or even degrade the accuracy of convolutional neural networks (CNNs) for various reasons (e.g., the low contrast of the boundary between the lesion and its surroundings, and image noise). To address this issue, we propose a novel focal attention module (FAM) for lesion segmentation of CT images. FAM contains a channel attention module and a spatial attention module. The spatial attention module first generates rough spatial attention, a shape prior of the lesion region obtained from the CT image using median filtering and distance transformation. The rough spatial attention is then fed into two 7 × 7 convolution layers for correction, yielding refined spatial attention on the lesion region. FAM was individually integrated with six state-of-the-art segmentation networks (e.g., UNet, DeepLabV3+, etc.), and these six combinations were validated on a public dataset of COVID-19 CT images. The results show that FAM improves the Dice Similarity Coefficient (DSC) of the CNNs by 2% and reduces the number of false negatives (FN) and false positives (FP) by up to 17.6%, which is significantly better than other attention modules such as CBAM and SENet. Furthermore, FAM significantly improves the convergence speed of model training and achieves better real-time performance. The code is available on GitHub (https://github.com/RobotvisionLab/FAM.git).
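The rough-spatial-attention prior can be approximated along the lines of the abstract's description, with a median filter followed by a distance transform; the intermediate thresholding step below is an assumption needed to obtain a binary region, not a detail given in the abstract.

```python
# Sketch: a shape prior from median filtering and a distance transform.
import numpy as np
from scipy.ndimage import median_filter, distance_transform_edt

def rough_spatial_attention(ct_slice, intensity_threshold=-300.0):
    smoothed = median_filter(ct_slice, size=5)         # suppress noise
    candidate = smoothed > intensity_threshold         # coarse lesion/tissue region (assumed step)
    dist = distance_transform_edt(candidate)           # distance to the region boundary
    return dist / (dist.max() + 1e-7)                  # normalized attention prior

prior = rough_spatial_attention(np.random.uniform(-1000, 200, size=(512, 512)))
# In FAM this prior would then be refined by two 7x7 convolution layers.
```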

15.
Diagnostics (Basel) ; 12(8)2022 Jul 26.
Article in English | MEDLINE | ID: covidwho-1957249

ABSTRACT

BACKGROUND: Automated segmentation of COVID-19 infection lesions and assessment of the severity of infection are critical in COVID-19 diagnosis and treatment. Based on large amounts of annotated data, deep learning approaches have been widely used in COVID-19 medical image analysis. However, the number of medical image samples required is generally huge, and it is challenging to obtain enough annotated medical images for training a deep CNN model. METHODS: To address these challenges, we propose a novel self-supervised deep learning method for automated segmentation of COVID-19 infection lesions and assessment of the severity of infection, which reduces the dependence on annotated training samples. In the proposed method, a large amount of unlabeled data is first used to pre-train an encoder-decoder model to learn rotation-dependent and rotation-invariant features. Then, a small amount of labeled data is used to fine-tune the pre-trained encoder-decoder for COVID-19 severity classification and lesion segmentation. RESULTS: The proposed method was tested on two public COVID-19 CT datasets and one self-built dataset. Accuracy, precision, recall, and F1-score were used to measure classification performance, and the Dice coefficient was used to measure segmentation performance. For COVID-19 severity classification, the proposed method outperformed other unsupervised feature-learning methods by about 7.16% in accuracy. For segmentation, when 100% of the labeled data was used, the Dice value of the proposed method was 5.58% higher than that of U-Net; with 70% of the labeled data, our method was 8.02% higher than U-Net; with 30%, 11.88% higher; and with 10%, 16.88% higher. CONCLUSIONS: The proposed method provides better classification and segmentation performance than other methods under limited labeled data.
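A common way to learn rotation-related features from unlabeled slices is the rotation pretext task: each slice is rotated by 0/90/180/270 degrees and the network is trained to predict which rotation was applied. The sketch below shows only this generic data-generation step, not the paper's exact recipe.

```python
# Sketch: build a rotation-prediction pretext batch from unlabeled square slices.
import numpy as np

def make_rotation_batch(slices):
    # slices: array of shape (N, H, H); returns rotated images and rotation labels in {0,1,2,3}
    images, labels = [], []
    for img in slices:
        k = np.random.randint(4)          # number of quarter-turns
        images.append(np.rot90(img, k))
        labels.append(k)
    return np.stack(images), np.array(labels)

x, y = make_rotation_batch(np.random.rand(8, 128, 128))
```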

16.
Med Biol Eng Comput ; 60(9): 2721-2736, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-1935853

ABSTRACT

COVID-19 has been spreading continuously since its outbreak, and the detection of its manifestations in the lung via chest computed tomography (CT) imaging is an indispensable step in investigating the diagnosis and prognosis of COVID-19. Automatic and accurate segmentation of infected lesions is highly desirable for fast and accurate diagnosis and further assessment of COVID-19 pneumonia. However, two-dimensional methods generally neglect the inter-slice context, while three-dimensional methods usually have high GPU memory consumption and calculation cost. To address these limitations, we propose a two-stage hybrid UNet to automatically segment infected regions, which is evaluated on multicenter data obtained from seven hospitals. Moreover, we train a 3D-ResNet for COVID-19 pneumonia screening. In the segmentation tasks, the Dice coefficient reaches 97.23% for lung segmentation and 84.58% for lesion segmentation. In the classification tasks, our model can identify COVID-19 pneumonia with an area under the receiver-operating characteristic curve of 0.92, an accuracy of 92.44%, a sensitivity of 93.94%, and a specificity of 92.45%. In comparison with other state-of-the-art methods, the proposed approach could serve as an efficient assisting tool for radiologists in COVID-19 diagnosis from CT images.


Subject(s)
COVID-19; COVID-19/diagnostic imaging; COVID-19 Testing; Humans; Lung/diagnostic imaging; SARS-CoV-2; Tomography, X-Ray Computed/methods
17.
Knowl Based Syst ; 252: 109278, 2022 Sep 27.
Article in English | MEDLINE | ID: covidwho-1907530

ABSTRACT

Coronavirus Disease 2019 (COVID-19) still shows a pandemic trend globally. Detecting infected individuals and analyzing their status can provide patients with proper healthcare while protecting the general population. Chest CT (computed tomography) is an effective tool for COVID-19 screening, as it displays detailed pathology-related information. To achieve automated COVID-19 diagnosis and lung CT image segmentation, convolutional neural networks (CNNs) have become the mainstream method. However, most previous works consider automated diagnosis and image segmentation as two independent tasks, some focusing on lung-field segmentation and others on single-lesion segmentation. Moreover, a lack of clinical explainability is a common problem for CNN-based methods. In this context, we develop a multi-task learning framework in which the diagnosis of COVID-19 and multi-lesion recognition (segmentation of CT images) are achieved simultaneously. The core of the proposed framework is an explainable multi-instance multi-task network. The network learns task-related features adaptively with learnable weights and gives explainable diagnosis results by suggesting local CT images with lesions as additional evidence. Then, severity assessment of COVID-19 and lesion quantification are performed to analyze patient status. Extensive experimental results on real-world datasets show that the proposed framework outperforms all compared approaches for COVID-19 diagnosis and multi-lesion segmentation.

18.
Medical Imaging 2022: Image Processing ; 12032, 2022.
Article in English | Scopus | ID: covidwho-1901888

ABSTRACT

We propose a fast and robust multi-class deep learning framework for segmenting COVID-19 lesions, ground-glass opacities and high opacities (including consolidations and pleural effusion), from non-contrast CT scans using a convolutional Long Short-Term Memory network for self-attention. Our method allows rapid quantification of pneumonia burden from CT with performance equivalent to expert readers. The mean Dice score across 5 folds was 0.8776 with a standard deviation of 0.0095. A low standard deviation between the results from each fold indicates that the models were trained equally well regardless of the training fold. The cumulative per-patient mean Dice score (0.8775 ± 0.075) for N = 167 patients, after concatenation, is consistent with the results from each of the 5 folds. We obtained excellent Pearson correlations (expert vs. automatic) of 0.9396 (p < 0.0001) and 0.9843 (p < 0.0001) for ground-glass opacity and high opacity volumes, respectively. Our model outperforms Unet2d (p < 0.05) and Unet3d (p < 0.05) in segmenting high opacities, has comparable performance with Unet2d in segmenting ground-glass opacities, and significantly outperforms Unet3d (p < 0.0001) in segmenting ground-glass opacities. Our model also runs faster on CPU and GPU than Unet2d and Unet3d. For the same number of input slices, our model consumed 0.83x and 0.26x the memory consumed by Unet2d and Unet3d, respectively. © 2022 SPIE

19.
19th IEEE International Symposium on Biomedical Imaging, ISBI 2022 ; 2022-March, 2022.
Article in English | Scopus | ID: covidwho-1846118

ABSTRACT

To help clinicians diagnose diseases and monitor lesion conditions more efficiently, automated lesion segmentation is a compelling approach. As it is time-consuming and costly to obtain pixel-level annotations, weakly supervised learning has become a promising trend. Recent works based on Class Activation Mapping (CAM) have achieved success for natural images, but they have not fully utilized the intensity property of medical images, so their performance may not be good enough. In this work, we propose a novel weakly supervised lesion segmentation framework with self-guidance by CT intensity clustering. The proposed method takes full advantage of the fact that CT intensity represents the density of materials, and partitions pixels into different groups by intensity clustering. Clusters with high lesion probability, as determined by the CAM, are selected to generate lesion masks. Such lesion masks are used to derive self-guided loss functions that improve the CAM for better lesion segmentation. Our method achieves a Dice score of 0.5874 on the COVID-19 dataset and 0.4534 on the Liver Tumor Segmentation Challenge (LiTS) dataset. © 2022 IEEE.
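The clustering-plus-CAM idea can be sketched as partitioning the pixels of a slice into intensity clusters and keeping the clusters whose mean CAM response is high; the cluster count and threshold below are illustrative assumptions, not the paper's settings.

```python
# Sketch: intensity clustering guided by a class-activation map to form a lesion mask.
import numpy as np
from sklearn.cluster import KMeans

def cluster_guided_mask(ct_slice, cam, n_clusters=4, cam_threshold=0.5):
    intensities = ct_slice.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(intensities)
    labels = labels.reshape(ct_slice.shape)
    mask = np.zeros_like(ct_slice, dtype=np.uint8)
    for c in range(n_clusters):
        if cam[labels == c].mean() > cam_threshold:   # cluster with high lesion probability
            mask[labels == c] = 1
    return mask

slice_hu = np.random.uniform(-1000, 200, size=(64, 64))
cam_map = np.random.rand(64, 64)                      # toy CAM in [0, 1]
lesion = cluster_guided_mask(slice_hu, cam_map)
```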

20.
19th IEEE International Symposium on Biomedical Imaging, ISBI 2022 ; 2022-March, 2022.
Article in English | Scopus | ID: covidwho-1846117

ABSTRACT

The spread of the novel coronavirus disease 2019 (COVID-19) has claimed millions of lives. Automatic segmentation of lesions from CT images can assist doctors with screening, treatment, and monitoring. However, accurate segmentation of lesions from CT images can be very challenging due to data and model limitations. Recently, Transformer-based networks have attracted a lot of attention in computer vision, as the Transformer outperforms CNNs on a range of tasks. In this work, we propose a novel network structure that combines a CNN and a Transformer for the segmentation of COVID-19 lesions. We further propose an efficient semi-supervised learning framework to address the shortage of labeled data. Extensive experiments showed that our proposed network outperforms most existing networks, and the semi-supervised learning framework can outperform the base network by 3.0% and 8.2% in terms of Dice coefficient and sensitivity, respectively. © 2022 IEEE.
